In the context of Sparkplug edge node/device development, by "data provision", we understand the process through which your code provides data that is published by the Sparkplug edge node or device, and subsequently subscribed to by Sparkplug host applications.
In the context of Sparkplug edge node/device development, by "data consumption", we understand the process through which your code consumes the commands that it receives from Sparkplug host applications.
Rapid Toolkit for Sparkplug has three major models that you can use when providing data:
The data provision models can be combined in the same edge node/device, i.e. some metrics may use one model, and other metrics a different model.
The Custom Data Provision Model is selected by setting the PublishingInterval Property on the edge node or device to Timeout.Infinite (-1). There is no explicit "switch" to choose between the pull and push data provision models. With great simplification, we can say that if you specify a read method, the Rapid Toolkit for Sparkplug Pull Data Provision Model will be used. If you do not specify the read method, you then need to write the code that gathers and pushes the data, and you end up with the Rapid Toolkit for Sparkplug Push Data Provision Model. Here is (roughly) what happens when Rapid Toolkit for Sparkplug obtains the data for Sparkplug publication:
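As a minimal sketch of this distinction, assuming hypothetical type names (SparkplugEdgeNode, SparkplugMetric) and a callback-style read registration that may not match the actual toolkit API:

```csharp
// Hypothetical sketch only: the type names and the readFunction
// parameter are illustrative assumptions, not the actual
// Rapid Toolkit for Sparkplug API.
var edgeNode = new SparkplugEdgeNode();

// Supplying a read method selects the Pull Data Provision Model:
// the toolkit calls it whenever it needs a fresh value to publish.
edgeNode.Metrics.Add(new SparkplugMetric("Temperature",
    readFunction: () => temperatureSensor.ReadCelsius()));

// Omitting the read method leaves you with the Push Data Provision
// Model: your code gathers the data and pushes it to the metric.
```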
The code for pulling the data can be attached to each metric separately, or you can use common code at some higher level. For example, a device object can have code that handles all Reads for the metrics contained in the device. For more details, see Request Propagation (Bubbling).
In Rapid Toolkit for Sparkplug, each Sparkplug producer (edge node or device) has a publishing interval associated with it. The publishing interval determines how often the data should be obtained from the underlying source and published to Sparkplug. With the default settings, Rapid Toolkit for Sparkplug periodically polls your data and publishes it.
If you set the ReportByException Property to true, the polling is not performed, and you need to update each metric's ReadData Property from your code yourself. The periodic publishing still runs, and the modified data is published to Sparkplug automatically.
You can also set the publishing interval to Timeout.Infinite, and then completely take over the responsibility for publishing the data by calling the PublishDataPayload Method whenever you want to publish the data to Sparkplug. In that case, your code will have to assemble the payload before calling the method.
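The three configurations described above can be sketched as follows. Only the PublishingInterval, ReportByException and ReadData Properties and the PublishDataPayload Method come from the text; the surrounding object names and value shapes are assumptions:

```csharp
// 1. Default: the toolkit periodically polls the data and publishes it.
edgeNode.PublishingInterval = 1000;  // milliseconds (unit assumed)

// 2. Report by exception: no polling; your code updates ReadData,
//    and the periodic publishing picks up the modified values.
edgeNode.ReportByException = true;
temperatureMetric.ReadData = 23.5;  // value shape is an assumption

// 3. Fully custom: no periodic publishing; your code assembles the
//    payload and publishes it whenever appropriate.
edgeNode.PublishingInterval = Timeout.Infinite;
edgeNode.PublishDataPayload(payload);  // payload assembled by your code
```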
Rapid Toolkit for Sparkplug also has two major models that you can use when consuming data:
The data consumption models can be combined in the same edge node/device, i.e. some metrics may use one model, and other metrics a different model.
There is no explicit "switch" to choose between the two models. With great simplification, we can say that if you specify a write method, the Rapid Toolkit for Sparkplug Push Data Consumption Model will be used. If you do not specify the write method, you then need to write the code that pulls the data from the metric(s) and sends it to the underlying system, and you end up with the Rapid Toolkit for Sparkplug Pull Data Consumption Model. Here is (roughly) what happens when Rapid Toolkit for Sparkplug handles a Sparkplug command:
The code for handling the data push can be attached to each metric separately, or you can use common code at some higher level. For example, a device object can have code that handles all Writes for the metrics contained in the device. For more details, see Request Propagation (Bubbling).
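A sketch of the push consumption model, again with hypothetical type names and a callback-style write registration that may not match the actual toolkit API:

```csharp
// Hypothetical sketch: supplying a write method selects the Push Data
// Consumption Model; the writeFunction parameter is an assumption.
device.Metrics.Add(new SparkplugMetric("Setpoint",
    // Called by the toolkit when a Sparkplug command writes this metric:
    writeFunction: value => plc.WriteSetpoint(value)));

// Without a write method, you are in the Pull Data Consumption Model:
// your code itself reads the written value from the metric and
// forwards it to the underlying system.
```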
When the WriteLoopback Property of the metric is true (the default), any data successfully written to the metric will also update its ReadData Property, and will therefore become the read data for subsequent publication (mainly in Rapid Toolkit for Sparkplug Push Data Provision Model). You can set the WriteLoopback Property of the metric to false to disable this behavior.
The write loopback also allows easy implementation of Sparkplug Read-Write Register Metrics.
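A sketch of the loopback behavior, assuming a metric object named setpointMetric; only the WriteLoopback and ReadData Property names come from the text:

```csharp
// Default: WriteLoopback is true, so a successfully written value also
// becomes the metric's ReadData and is published back to hosts; this is
// the natural behavior of a Sparkplug read-write register metric.

// To decouple the written value from the published value:
setpointMetric.WriteLoopback = false;  // writes no longer update ReadData
```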
In many Sparkplug edge nodes or devices, there are metrics (or groups of metrics) that have basically the same behavior, and differ just in the concrete "details" such as the target data item they represent, or their data type. Obviously, your code can define the behavior of each such metric independently, and it will work well. It is, however, often the case that shorter and more legible code can be achieved by defining the behavior of such metrics all at once.
The request bubbling is a form of request propagation. It is a feature of Rapid Toolkit for Sparkplug that allows you to place the code that defines the metric behavior either at the metric level, or at the level of its parent. For metrics that reside directly in the edge node, their parent is the edge node object. For metrics that reside in the device, their parent is the device object.
With request bubbling, the Read or Write attempt is first made on the metric itself. If it is handled, no further request propagation takes place. If it is not handled, the request is sent to the parent node (edge node or device).
For Reads (data provision), the "requests" we are referring to are the invocations of the OnRead Method and the eventual raising of the Read Event. For Writes (data consumption), the "requests" we are referring to are the invocations of the OnWrite Method and the eventual raising of the Write Event.
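A sketch of bubbling for Reads, assuming a .NET-style Read event with hypothetical event-argument members (MetricName, Value, Handled); the actual event signature in Rapid Toolkit for Sparkplug may differ:

```csharp
// The metrics themselves have no Read handler, so each Read request
// bubbles up to the parent device, where one handler serves them all.
device.Read += (sender, e) =>
{
    // e.MetricName, e.Value and e.Handled are assumed members,
    // used here for illustration only.
    e.Value = plc.ReadRegister(e.MetricName);
    e.Handled = true;  // handled here; no further propagation
};
```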
The request propagation mechanism can be controlled by certain properties on each metric. They are:
When you define a Read or Write (or other) behavior, you have two options:
The end result of both these options is about the same.
Handling the events is somewhat less efficient. Also, your code needs to add the event handler to each object that requires it. Defining a derived class and overriding a method requires more coding, but it allows you to centralize the behavior of a certain group of metrics in that class, possibly with other related data or code.
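The two options might look like this; the OnRead Method signature, the event-argument members, and the Plc helper are assumptions for illustration:

```csharp
// Option 1: attach an event handler to each metric that needs it.
metric.Read += (sender, e) => e.Value = Plc.ReadRegister(address);

// Option 2: derive a class and override the method, centralizing the
// behavior of a whole group of similar metrics in one place.
class RegisterMetric : SparkplugMetric
{
    private readonly int _address;

    public RegisterMetric(string name, int address) : base(name)
        => _address = address;

    protected override void OnRead(ReadEventArgs e)  // signature assumed
    {
        e.Value = Plc.ReadRegister(_address);
    }
}
```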
Example: Examples - Sparkplug Edge Node - Metrics reading by method override
Example: Examples - Sparkplug Edge Node - Metrics writing by method override
The following sequence of steps describes in detail what happens when metric data is read by Rapid Toolkit for Sparkplug.
If this sounds too complicated, let's put it into more comprehensible, though less precise, terms:
With this, you can see the flexibility provided by the algorithm used. It allows for various data provision approaches, and specifically:
The following sequence of steps describes in detail what happens when the metric data is written to by Rapid Toolkit for Sparkplug as a consequence of a received Sparkplug command.
If this sounds too complicated, let's put it into more comprehensible, though less precise, terms:
With this, you can see the flexibility provided by the algorithm used. It allows for various data consumption approaches, and specifically:
Sparkplug is a trademark of Eclipse Foundation, Inc. "MQTT" is a trademark of the OASIS Open standards consortium. Other related terms are trademarks of their respective owners. Any use of these terms on this site is for descriptive purposes only and does not imply any sponsorship, endorsement or affiliation.